This markdown describes analyses using the R packages meta (Schwarzer, Carpenter, and Rücker 2015) and metafor (Viechtbauer 2010), as well as functions described in Harrer & Ebert (2018). Knapp-Hartung adjustments are used.
library(meta)
library(metafor)
library(knitr)
library(kableExtra)
library(dplyr)
library(esc)
load(file="Meta_Analysis_Data.RData")
m<-metagen(TE,
seTE,
data=Meta_Analysis_Data,
studlab=paste(Author),
comb.fixed = FALSE,
method.tau = "DL",
hakn=TRUE,
prediction=TRUE)
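What metagen() computes here can be reproduced by hand. The following is a minimal base-R sketch of DerSimonian-Laird pooling with the Hartung-Knapp adjustment, using made-up effect sizes purely for illustration (the values are not from the analysis dataset):

```r
# Toy effect sizes (TE = Hedges' g) and standard errors -- illustration only
TE   <- c(0.71, 0.35, 0.18, 0.63)
seTE <- c(0.26, 0.20, 0.12, 0.20)
k <- length(TE)

# Fixed-effect (inverse-variance) weights and Cochran's Q
w <- 1 / seTE^2
mu_fixed <- sum(w * TE) / sum(w)
Q <- sum(w * (TE - mu_fixed)^2)

# DerSimonian-Laird estimate of the between-study variance tau^2
tau2 <- max(0, (Q - (k - 1)) / (sum(w) - sum(w^2) / sum(w)))

# Random-effects weights and pooled estimate
w.r <- 1 / (seTE^2 + tau2)
mu  <- sum(w.r * TE) / sum(w.r)

# Hartung-Knapp variance estimator with a t-based 95% CI (df = k - 1)
var.hk <- sum(w.r * (TE - mu)^2) / ((k - 1) * sum(w.r))
ci <- mu + c(-1, 1) * qt(0.975, k - 1) * sqrt(var.hk)
round(c(mu = mu, lower = ci[1], upper = ci[2]), 4)
```

The Hartung-Knapp step replaces the usual normal-based CI with a t-based one whose variance is estimated from the observed dispersion, which is why it typically widens the interval when heterogeneity is present.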
Forest Plot
forest(m,prediction = TRUE,
col.predict = "blue",
fontsize=8,spacing=0.6,
squaresize = 0.8,
weight.study = "random",
xlab = "Hedges' g",
xlim=c(-0.5,2.0),
leftlabs = c("Study","g","SE"),
label.left = "favors control",
print.I2.ci = TRUE,
print.zval = TRUE,
digits.se = 3,
print.tau2 = FALSE,
sortvar = TE)
As a sensitivity analysis, the pooling is repeated while excluding the identified outliers Danitz & Orsillo and Shapiro et al. (studies 3 and 16); see “Outlier Detection & Influence Analysis”.
ma.data.or<-as.data.frame(Meta_Analysis_Data[-c(3,16),])
m.ma.data.or<-metagen(TE,
seTE,
data=ma.data.or,
studlab=paste(Author),
comb.fixed = FALSE,
method.tau = "DL",
hakn=TRUE,
prediction=TRUE)
Forest Plot
forest(m.ma.data.or,prediction = TRUE,
col.predict = "blue",
fontsize=8,spacing=0.6,
squaresize = 0.8,
weight.study = "random",
xlab = "Hedges' g",
xlim=c(-0.5,2.0),
leftlabs = c("Study","g","SE"),
label.left = "favors control",
print.I2.ci = TRUE,
print.zval = TRUE,
digits.se = 3,
print.tau2 = TRUE)
In this sensitivity analysis, only studies with a low risk of bias are included.
ma.data.lrb <- filter(Meta_Analysis_Data, RoB == "low")
m.ma.data.lrb<-metagen(TE,
seTE,
data=ma.data.lrb,
studlab=paste(Author),
comb.fixed = FALSE,
method.tau = "DL",
hakn=TRUE,
prediction=TRUE)
m.ma.data.lrb
## 95%-CI %W(random)
## Cavanagh et al. 0.3549 [-0.0300; 0.7397] 9.2
## de Vibe et al. 0.1825 [-0.0484; 0.4133] 17.8
## Frazier et al. 0.4219 [ 0.1380; 0.7057] 14.0
## Frogeli et al. 0.6300 [ 0.2458; 1.0142] 9.3
## Hazlett-Stevens & Oren 0.5287 [ 0.1162; 0.9412] 8.3
## Hintz et al. 0.2840 [-0.0453; 0.6133] 11.6
## Kang et al. 1.2751 [ 0.6142; 1.9360] 3.8
## Lever Taylor et al. 0.3884 [-0.0639; 0.8407] 7.2
## Phang et al. 0.5407 [ 0.0619; 1.0196] 6.5
## Rasanen et al. 0.4262 [-0.0794; 0.9317] 6.0
## Warnecke et al. 0.6000 [ 0.1120; 1.0880] 6.3
##
## Number of studies combined: k = 11
##
## 95%-CI t p-value
## Random effects model 0.4343 [0.2800; 0.5885] 6.27 < 0.0001
## Prediction interval [0.1340; 0.7345]
##
## Quantifying heterogeneity:
## tau^2 = 0.0128; H = 1.16 [1.00; 1.65]; I^2 = 25.5% [0.0%; 63.1%]
##
## Test of heterogeneity:
## Q d.f. p-value
## 13.42 10 0.2012
##
## Details on meta-analytical method:
## - Inverse variance method
## - DerSimonian-Laird estimator for tau^2
## - Hartung-Knapp adjustment for random effects model
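The heterogeneity statistics printed above follow directly from Cochran's Q and its degrees of freedom (df = k - 1), which can be checked by hand:

```r
# Values taken from the output above (Q = 13.42 on 10 df)
Q  <- 13.42
df <- 10

H  <- sqrt(Q / df)          # ratio of observed to expected dispersion
I2 <- max(0, (Q - df) / Q)  # proportion of variability due to heterogeneity

round(H, 2)        # 1.16, as printed
round(100 * I2, 1) # 25.5 (%), as printed
```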
Forest Plot
forest(m.ma.data.lrb,
prediction = TRUE,
col.predict = "blue",
fontsize=8,spacing=0.6,
squaresize = 0.8,
weight.study = "random",
xlab = "Hedges' g",
xlim=c(-0.5,2.0),
leftlabs = c("Study","g","SE"),
label.left = "favors control",
print.I2.ci = TRUE,
print.zval = TRUE,
digits.se = 3,
print.tau2 = TRUE)
A study is defined as outlying when its 95% CI lies entirely outside the 95% CI of the pooled effect.
source("spot.outliers.R")
spot.outliers.random(data = m)
| Author | lowerci | upperci |
|---|---|---|
| DanitzOrsillo | 1.1138668 | 2.468473 |
| Shapiro et al. | 0.8617853 | 2.097667 |
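The decision rule applied by spot.outliers.random() can be sketched in a few lines of base R. Toy CI bounds are used below for illustration; with a fitted meta object the per-study bounds would be `m$lower`/`m$upper` and the pooled bounds `m$lower.random`/`m$upper.random` (field names as assumed from the meta package):

```r
# Toy 95% CI bounds for four studies -- illustration only
study.lower  <- c(0.20, 1.11, -0.05, 0.86)
study.upper  <- c(0.72, 2.47,  0.41, 2.10)

# Pooled random-effects 95% CI (toy values)
pooled.lower <- 0.39
pooled.upper <- 0.80

# A study is flagged when its CI lies entirely above or entirely
# below the pooled CI
outlier <- study.lower > pooled.upper | study.upper < pooled.lower
which(outlier)  # studies 2 and 4 are flagged
```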
m.inf<-metainf(m,pooled="random")
forest(m.inf,
rightcols = c("TE","I2"),
rightlabs=c("g","I-squared"),
col.predict = "black",
fontsize=8,spacing=0.6,
squaresize = 0.8,
xlab = "Hedges' g",
xlim=c(0,2),
label.left = "favors control",
digits.I2 = 2,
sortvar = I2)
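Conceptually, metainf() re-pools the meta-analysis k times, omitting one study each time. A minimal base-R sketch of the idea, with toy data and fixed-effect pooling for brevity (metainf(m, pooled = "random") does the same with the random-effects model):

```r
# Toy data -- illustration only; study 5 is deliberately extreme
TE   <- c(0.71, 0.35, 0.18, 0.63, 1.79)
seTE <- c(0.26, 0.20, 0.12, 0.20, 0.35)
w <- 1 / seTE^2

# Leave-one-out pooled estimates
loo <- sapply(seq_along(TE), function(i) {
  sum((w * TE)[-i]) / sum(w[-i])  # pooled estimate without study i
})
round(loo, 3)
# Dropping the extreme study (no. 5) shifts the estimate the most.
```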
m.metafor<-rma(yi = TE, sei = seTE, data = Meta_Analysis_Data, method = "DL", test = "knha")
inf <- influence(m.metafor)
plot(inf)
Baujat plot (Baujat et al. 2002)
baujat(m,
studlab = TRUE,
cex.studlab = 1)
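A hedged sketch of the two Baujat-plot coordinates, assuming the fixed-effect definitions in Baujat et al. (2002): the x-axis shows each study's contribution to Cochran's Q, the y-axis its influence on the pooled estimate. Toy data only:

```r
# Toy data -- illustration only; study 5 is deliberately extreme
TE   <- c(0.71, 0.35, 0.18, 0.63, 1.79)
seTE <- c(0.26, 0.20, 0.12, 0.20, 0.35)
w  <- 1 / seTE^2
mu <- sum(w * TE) / sum(w)

# x-axis: contribution of each study to overall heterogeneity (Q)
x <- w * (TE - mu)^2

# y-axis: squared change in the pooled estimate when study i is left
# out, scaled by the precision of the leave-one-out estimate
mu.loo <- sapply(seq_along(TE),
                 function(i) sum((w * TE)[-i]) / sum(w[-i]))
y <- (mu.loo - mu)^2 *
     sapply(seq_along(TE), function(i) sum(w[-i]))

round(cbind(x, y), 3)  # the extreme study dominates the x-axis
```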
Study 3 (Danitz & Orsillo, 2014) and Study 16 (Shapiro et al., 2011) are identified as outliers.
m.eggers<-metabias(m, method.bias = "linreg")
m.eggers
##
## Linear regression test of funnel plot asymmetry
##
## data: m
## t = 4.677, df = 16, p-value = 0.0002525
## alternative hypothesis: asymmetry in funnel plot
## sample estimates:
## bias se.bias slope
## 4.1111350 0.8790029 -0.3407464
What is the 95% CI of the intercept (bias)?
The ULCI of the intercept is
## [1] 5.833981
The LLCI of the intercept is
## [1] 2.388289
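The printed t value and the intercept CI can be reproduced from the bias estimate and its standard error; a normal approximation (bias ± 1.96 × se.bias) matches the reported bounds up to rounding of the printed estimates:

```r
# Values taken from the metabias() output above
bias    <- 4.1111350
se.bias <- 0.8790029

t.val <- bias / se.bias                       # Eggers' test statistic
ci    <- bias + c(-1, 1) * qnorm(0.975) * se.bias

round(t.val, 3)  # 4.677, as printed
round(ci, 4)     # approximately [2.3883, 5.8340]
```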
#Print Funnel-Plot with study labels
funnel(m,
comb.random=TRUE,
pch=16,
studlab=TRUE,
cex.studlab = 0.5,
contour=c(0.9, 0.95, 0.99),
xlab="Hedges' g",
col.contour=c("darkgray", "gray","lightgray"))
legend(0.7,0,
c("0.1 > p > 0.05", "0.05 > p > 0.01", "p< 0.01"),
cex = 0.8,
fill=c("darkgray", "gray","lightgray"),
bty="n")
#Print Funnel-Plot without study labels
funnel(m,
comb.random=TRUE,
pch=16, studlab=FALSE,
cex.studlab = 0.5,
contour=c(0.9, 0.95, 0.99),
xlab="Hedges' g",
col.contour=c("darkgray", "gray","lightgray"))
legend(0.7,0,
c("0.1 > p > 0.05", "0.05 > p > 0.01", "p< 0.01"),
cex = 0.8,
fill=c("darkgray", "gray","lightgray"),
bty="n")
#Duval & Tweedie Trim-and-fill
trimfillm<-trimfill(m)
print(trimfillm)
## 95%-CI %W(random)
## Call et al. 0.7091 [ 0.1979; 1.2203] 3.7
## Cavanagh et al. 0.3549 [-0.0300; 0.7397] 4.3
## DanitzOrsillo 1.7912 [ 1.1139; 2.4685] 3.0
## de Vibe et al. 0.1825 [-0.0484; 0.4133] 4.9
## Frazier et al. 0.4219 [ 0.1380; 0.7057] 4.7
## Frogeli et al. 0.6300 [ 0.2458; 1.0142] 4.3
## Gallego et al. 0.7249 [ 0.2846; 1.1652] 4.0
## Hazlett-Stevens & Oren 0.5287 [ 0.1162; 0.9412] 4.2
## Hintz et al. 0.2840 [-0.0453; 0.6133] 4.5
## Kang et al. 1.2751 [ 0.6142; 1.9360] 3.1
## Kuhlmann et al. 0.1036 [-0.2781; 0.4853] 4.3
## Lever Taylor et al. 0.3884 [-0.0639; 0.8407] 4.0
## Phang et al. 0.5407 [ 0.0619; 1.0196] 3.9
## Rasanen et al. 0.4262 [-0.0794; 0.9317] 3.7
## Ratanasiripong 0.5154 [-0.1731; 1.2039] 3.0
## Shapiro et al. 1.4797 [ 0.8618; 2.0977] 3.3
## SongLindquist 0.6126 [ 0.1683; 1.0569] 4.0
## Warnecke et al. 0.6000 [ 0.1120; 1.0880] 3.8
## Filled: Warnecke et al. 0.0520 [-0.4360; 0.5401] 3.8
## Filled: SongLindquist 0.0395 [-0.4048; 0.4837] 4.0
## Filled: Frogeli et al. 0.0220 [-0.3621; 0.4062] 4.3
## Filled: Call et al. -0.0571 [-0.5683; 0.4541] 3.7
## Filled: Gallego et al. -0.0729 [-0.5132; 0.3675] 4.0
## Filled: Kang et al. -0.6230 [-1.2839; 0.0379] 3.1
## Filled: Shapiro et al. -0.8277 [-1.4456; -0.2098] 3.3
## Filled: DanitzOrsillo -1.1391 [-1.8164; -0.4618] 3.0
##
## Number of studies combined: k = 26 (with 8 added studies)
##
## 95%-CI t p-value
## Random effects model 0.3419 [ 0.1068; 0.5770] 3.00 0.0061
## Prediction interval [-0.5128; 1.1966]
##
## Quantifying heterogeneity:
## tau^2 = 0.1585; H = 2.05 [1.70; 2.47]; I^2 = 76.2% [65.4%; 83.7%]
##
## Test of heterogeneity:
## Q d.f. p-value
## 105.15 25 < 0.0001
##
## Details on meta-analytical method:
## - Inverse variance method
## - DerSimonian-Laird estimator for tau^2
## - Hartung-Knapp adjustment for random effects model
## - Trim-and-fill method to adjust for funnel plot asymmetry
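A hedged sketch of the idea behind trimfill(): the number of studies missing from one side of the funnel is estimated with a rank-based estimator applied to the effect sizes centered at the pooled estimate (assuming the L0 estimator of Duval & Tweedie; the meta implementation additionally iterates trimming and re-pooling). Toy deviations, with the largest all on the right:

```r
# Toy deviations from the pooled estimate -- illustration only
centered <- c(-0.10, -0.20, 0.30, 0.50, 0.80, 1.20)
n <- length(centered)

# Rank sum of the right-side (positive) deviations among |deviations|
Tn <- sum(rank(abs(centered))[centered > 0])

# L0 estimator of the number of missing left-side studies
L0 <- (4 * Tn - n * (n + 1)) / (2 * n - 1)
max(0, round(L0))
```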
Funnel plot with imputed studies
# with study labels
funnel(trimfillm,
comb.random=TRUE,
studlab=TRUE,
cex.studlab = 0.5,
contour=c(0.9, 0.95, 0.99),
xlab="Hedges' g",
col.contour=c("darkgray", "gray","lightgray"))
legend(0.7,0,
c("0.1 > p > 0.05", "0.05 > p > 0.01", "p< 0.01"),
cex = 0.8,
fill=c("darkgray", "gray","lightgray"),
bty="n")
# without study labels
funnel(trimfillm,
comb.random=TRUE,
studlab=FALSE,
cex.studlab = 0.5,
contour=c(0.9, 0.95, 0.99),
xlab="Hedges' g",
col.contour=c("darkgray", "gray","lightgray"))
legend(0.7,0,
c("0.1 > p > 0.05", "0.05 > p > 0.01", "p< 0.01"),
cex = 0.8,
fill=c("darkgray", "gray","lightgray"),
bty="n")
Publication bias (selective reporting) is now inspected using p-curve analysis (Simonsohn, Nelson, and Simmons 2014; Simonsohn, Simmons, and Nelson 2015; Aert, Wicherts, and Assen 2016).
load(file = "pcurvedata.RData")
# pcurvedata$t_obs<-round(pcurvedata$t_obs, digits = 2)
# tot<-data.frame(paste("t(",pcurvedata$df_obs,")=",pcurvedata$t_obs))
# colnames(tot)<-c("output")
# tot$output<-gsub(" ", "", tot$output, fixed = TRUE)
# totoutput<-as.character(tot$output)
# print(tot, row.names = FALSE)
# write(totoutput,ncolumns=1, file="input.txt")
# Get working directory path
# wd<-getwd()
# source("pcurve_app.R")
# pcurve_app("input.txt", wd)
knitr::include_graphics("1.png")
knitr::include_graphics("3.png")
Estimation of the true effect using P-Curve
load(file="pcurvedata.RData")
source("plotloss.R")
t_obs<-pcurvedata$t_obs
df_obs<-pcurvedata$df_obs
#estimation
plotloss(t_obs=t_obs,df_obs=df_obs,dmin=-.5,dmax=2)
## [1] 0.3645887
Estimation of the true effect using P-Curve - without outliers
According to van Aert and colleagues (Aert, Wicherts, and Assen 2016), effect size estimates obtained via p-curve may be distorted when heterogeneity is high (i.e., \(I^{2} > 50\%\)). This is the case in the main analysis; therefore, the effect size is estimated again without the outliers.
load(file="pcurvedata.RData")
source("plotloss.R")
pcurvedata<-as.data.frame(pcurvedata[-c(3,15),])
t_obs<-pcurvedata$t_obs
df_obs<-pcurvedata$df_obs
#estimation
plotloss(t_obs=t_obs,df_obs=df_obs,dmin=-.5,dmax=2)
## [1] 0.3321734
source("subgroup.analysis.mixed.effects.R")
subgroup.analysis.mixed.effects(data = m,
sg.var = Meta_Analysis_Data$RoB,
n.sg = 2,
subgroup1 = "low",
subgroup2 = "high")
## 95%-CI %W(fixed) meta
## 1 0.4343 [0.2986; 0.5700] 90.7 2
## 2 0.8104 [0.3871; 1.2337] 9.3 1
##
## Number of studies combined: k = 2
##
## 95%-CI z p-value
## Fixed effect model 0.4693 [0.3401; 0.5985] 7.12 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.0450; H = 1.66; I^2 = 63.6% [0.0%; 91.7%]
##
## Test of heterogeneity:
## Q d.f. p-value
## 2.75 1 0.0972
##
## Results for subgroups (fixed effect model):
## k 95%-CI Q tau^2 I^2
## meta = high 1 0.8104 [0.3871; 1.2337] 0.00 -- --
## meta = low 1 0.4343 [0.2986; 0.5700] 0.00 -- --
##
## Test for subgroup differences (fixed effect model):
## Q d.f. p-value
## Between groups 2.75 1 0.0972
## Within groups 0.00 0 --
##
## Details on meta-analytical method:
## - Inverse variance method
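The between-groups Q printed above can be reproduced from the two subgroup estimates and their confidence intervals alone, assuming the printed CIs are z-based (SE recovered as CI width divided by 2 × 1.96):

```r
# Subgroup estimates and 95% CIs taken from the output above
est   <- c(0.4343, 0.8104)
lower <- c(0.2986, 0.3871)
upper <- c(0.5700, 1.2337)

se <- (upper - lower) / (2 * qnorm(0.975))  # recover SEs from the CIs
w  <- 1 / se^2

pooled <- sum(w * est) / sum(w)             # approx. 0.4693, as printed
Q.between <- sum(w * (est - pooled)^2)
round(Q.between, 2)                         # 2.75, matching the output
```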
subgroup.analysis.mixed.effects(data = m,
sg.var = Meta_Analysis_Data$Control,
n.sg = 3,
subgroup1 = "WLC",
subgroup2 = "no intervention",
subgroup3 = "information only")
## 95%-CI %W(fixed) meta
## 1 0.7717 [0.3653; 1.1782] 8.7 3
## 2 0.5025 [0.2703; 0.7346] 26.6 2
## 3 0.4016 [0.2528; 0.5504] 64.7 1
##
## Number of studies combined: k = 3
##
## 95%-CI z p-value
## Fixed effect model 0.4605 [0.3408; 0.5803] 7.54 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.0073; H = 1.22 [1.00; 3.78]; I^2 = 32.9% [0.0%; 93.0%]
##
## Test of heterogeneity:
## Q d.f. p-value
## 2.98 2 0.2254
##
## Results for subgroups (fixed effect model):
## k 95%-CI Q tau^2 I^2
## meta = information only 1 0.4016 [0.2528; 0.5504] 0.00 -- --
## meta = no intervention 1 0.5025 [0.2703; 0.7346] 0.00 -- --
## meta = WLC 1 0.7717 [0.3653; 1.1782] 0.00 -- --
##
## Test for subgroup differences (fixed effect model):
## Q d.f. p-value
## Between groups 2.98 2 0.2254
## Within groups 0.00 0 --
##
## Details on meta-analytical method:
## - Inverse variance method
subgroup.analysis.mixed.effects(data = m,
sg.var = Meta_Analysis_Data$`intervention duration`,
n.sg = 2,
subgroup1 = "short",
subgroup2 = "long")
## 95%-CI %W(fixed) meta
## 1 0.4691 [0.2437; 0.6944] 60.6 2
## 2 0.7427 [0.4634; 1.0220] 39.4 1
##
## Number of studies combined: k = 2
##
## 95%-CI z p-value
## Fixed effect model 0.5769 [0.4016; 0.7523] 6.45 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.0207; H = 1.49; I^2 = 55.2% [0.0%; 89.1%]
##
## Test of heterogeneity:
## Q d.f. p-value
## 2.23 1 0.1350
##
## Results for subgroups (fixed effect model):
## k 95%-CI Q tau^2 I^2
## meta = long 1 0.7427 [0.4634; 1.0220] 0.00 -- --
## meta = short 1 0.4691 [0.2437; 0.6944] 0.00 -- --
##
## Test for subgroup differences (fixed effect model):
## Q d.f. p-value
## Between groups 2.23 1 0.1350
## Within groups 0.00 0 --
##
## Details on meta-analytical method:
## - Inverse variance method
subgroup.analysis.mixed.effects(data = m,
sg.var = Meta_Analysis_Data$`intervention type`,
n.sg = 3,
subgroup1 = "mindfulness",
subgroup2 = "ACT",
subgroup3 = "PCI")
## 95%-CI %W(fixed) meta
## 1 0.5578 [0.3608; 0.7547] 30.9 2
## 2 0.9046 [0.0994; 1.7098] 1.9 1
## 3 0.3631 [0.2295; 0.4967] 67.2 3
##
## Number of studies combined: k = 3
##
## 95%-CI z p-value
## Fixed effect model 0.4333 [0.3238; 0.5429] 7.75 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.0132; H = 1.40 [1.00; 2.59]; I^2 = 48.9% [0.0%; 85.1%]
##
## Test of heterogeneity:
## Q d.f. p-value
## 3.91 2 0.1415
##
## Results for subgroups (fixed effect model):
## k 95%-CI Q tau^2 I^2
## meta = ACT 1 0.9046 [0.0994; 1.7098] 0.00 -- --
## meta = mindfulness 1 0.5578 [0.3608; 0.7547] 0.00 -- --
## meta = PCI 1 0.3631 [0.2295; 0.4967] 0.00 -- --
##
## Test for subgroup differences (fixed effect model):
## Q d.f. p-value
## Between groups 3.91 2 0.1415
## Within groups 0.00 0 --
##
## Details on meta-analytical method:
## - Inverse variance method
subgroup.analysis.mixed.effects(data = m,
sg.var = Meta_Analysis_Data$population,
n.sg = 2,
subgroup1 = "students",
subgroup2 = "undergraduate students")
## 95%-CI %W(fixed) meta
## 1 0.4525 [0.2869; 0.6181] 79.8 1
## 2 0.7095 [0.3807; 1.0383] 20.2 2
##
## Number of studies combined: k = 2
##
## 95%-CI z p-value
## Fixed effect model 0.5045 [0.3566; 0.6524] 6.69 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.0154; H = 1.37; I^2 = 46.6%
##
## Test of heterogeneity:
## Q d.f. p-value
## 1.87 1 0.1712
##
## Results for subgroups (fixed effect model):
## k 95%-CI Q tau^2 I^2
## meta = students 1 0.4525 [0.2869; 0.6181] 0.00 -- --
## meta = undergraduate students 1 0.7095 [0.3807; 1.0383] 0.00 -- --
##
## Test for subgroup differences (fixed effect model):
## Q d.f. p-value
## Between groups 1.87 1 0.1712
## Within groups 0.00 0 --
##
## Details on meta-analytical method:
## - Inverse variance method
subgroup.analysis.mixed.effects(data = m,
sg.var = Meta_Analysis_Data$`mode of delivery`,
n.sg = 3,
subgroup1 = "group",
subgroup2 = "online",
subgroup3 = "book")
## 95%-CI %W(fixed) meta
## 1 0.7227 [0.4227; 1.0227] 6.0 2
## 2 0.3956 [0.3047; 0.4864] 65.3 3
## 3 0.4650 [0.3281; 0.6019] 28.7 1
##
## Number of studies combined: k = 3
##
## 95%-CI z p-value
## Fixed effect model 0.4351 [0.3617; 0.5085] 11.62 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.0070; H = 1.49 [1.00; 2.79]; I^2 = 55.0% [0.0%; 87.1%]
##
## Test of heterogeneity:
## Q d.f. p-value
## 4.44 2 0.1086
##
## Results for subgroups (fixed effect model):
## k 95%-CI Q tau^2 I^2
## meta = book 1 0.4650 [0.3281; 0.6019] 0.00 -- --
## meta = group 1 0.7227 [0.4227; 1.0227] 0.00 -- --
## meta = online 1 0.3956 [0.3047; 0.4864] 0.00 -- --
##
## Test for subgroup differences (fixed effect model):
## Q d.f. p-value
## Between groups 4.44 2 0.1086
## Within groups 0.00 0 --
##
## Details on meta-analytical method:
## - Inverse variance method
subgroup.analysis.mixed.effects(data = m,
sg.var = Meta_Analysis_Data$credit,
n.sg = 2,
subgroup1 = "yes",
subgroup2 = "none")
## 95%-CI %W(fixed) meta
## 1 0.5931 [0.2404; 0.9458] 14.2 2
## 2 0.5668 [0.4231; 0.7104] 85.8 1
##
## Number of studies combined: k = 2
##
## 95%-CI z p-value
## Fixed effect model 0.5705 [0.4375; 0.7035] 8.41 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0; H = 1.00; I^2 = 0.0%
##
## Test of heterogeneity:
## Q d.f. p-value
## 0.02 1 0.8922
##
## Results for subgroups (fixed effect model):
## k 95%-CI Q tau^2 I^2
## meta = none 1 0.5668 [0.4231; 0.7104] 0.00 -- --
## meta = yes 1 0.5931 [0.2404; 0.9458] 0.00 -- --
##
## Test for subgroup differences (fixed effect model):
## Q d.f. p-value
## Between groups 0.02 1 0.8922
## Within groups 0.00 0 --
##
## Details on meta-analytical method:
## - Inverse variance method
subgroup.analysis.mixed.effects(data = m,
sg.var = Meta_Analysis_Data$guidance.guided.unguided,
n.sg = 2,
subgroup1 = "guided",
subgroup2 = "self-guided")
## 95%-CI %W(fixed) meta
## 1 0.6450 [0.4108; 0.8793] 21.4 1
## 2 0.3764 [0.2541; 0.4986] 78.6 2
##
## Number of studies combined: k = 2
##
## 95%-CI z p-value
## Fixed effect model 0.4339 [0.3255; 0.5423] 7.85 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.0270; H = 1.99; I^2 = 74.8% [0.0%; 94.3%]
##
## Test of heterogeneity:
## Q d.f. p-value
## 3.97 1 0.0463
##
## Results for subgroups (fixed effect model):
## k 95%-CI Q tau^2 I^2
## meta = guided 1 0.6450 [0.4108; 0.8793] 0.00 -- --
## meta = self-guided 1 0.3764 [0.2541; 0.4986] 0.00 -- --
##
## Test for subgroup differences (fixed effect model):
## Q d.f. p-value
## Between groups 3.97 1 0.0463
## Within groups 0.00 0 --
##
## Details on meta-analytical method:
## - Inverse variance method
Aert, Robbie C. M. van, Jelte M. Wicherts, and Marcel A. L. M. van Assen. 2016. “Conducting Meta-Analyses Based on P Values: Reservations and Recommendations for Applying P-Uniform and P-Curve.” Perspectives on Psychological Science 11 (5): 713–29.
Baujat, Bertrand, Cédric Mahé, Jean-Pierre Pignon, and Catherine Hill. 2002. “A Graphical Method for Exploring Heterogeneity in Meta-Analyses: Application to a Meta-Analysis of 65 Trials.” Statistics in Medicine 21 (18): 2641–52.
Schwarzer, Guido, James R. Carpenter, and Gerta Rücker. 2015. Meta-Analysis with R. Cham: Springer.
Simonsohn, Uri, Leif D. Nelson, and Joseph P. Simmons. 2014. “P-Curve: A Key to the File-Drawer.” Journal of Experimental Psychology: General 143 (2): 534–47.
Simonsohn, Uri, Joseph P. Simmons, and Leif D. Nelson. 2015. “Better P-Curves: Making P-Curve Analysis More Robust to Errors, Fraud, and Ambitious P-Hacking, a Reply to Ulrich and Miller (2015).” Journal of Experimental Psychology: General 144 (6): 1146–52.
Viechtbauer, Wolfgang. 2010. “Conducting Meta-Analyses in R with the Metafor Package.” Journal of Statistical Software 36 (3): 1–48.